108 research outputs found

    Dot-to-Dot: Explainable Hierarchical Reinforcement Learning for Robotic Manipulation

    Full text link
    Robotic systems are increasingly capable of automating complex tasks, particularly by drawing on recent advances in intelligent systems, deep learning and artificial intelligence. However, as robots and humans interact more closely, the interpretability, or explainability, of robot decision-making for the human grows in importance. Successful interaction and collaboration can only take place through a mutual understanding of the underlying representations of the environment and the task at hand, which remains a challenge for deep learning systems. We present a hierarchical deep reinforcement learning system consisting of a low-level agent that handles the large action and state spaces of a robotic system efficiently by following the directives of a high-level agent, which learns the high-level dynamics of the environment and task. This high-level agent forms a representation of the world and the task that is interpretable by a human operator. The method, which we call Dot-to-Dot, is evaluated on MuJoCo-based models of the Fetch Robotics Manipulator and the Shadow Hand. Results show efficient learning of large action and state spaces by the low-level agent, and an interpretable representation of the task and decision-making process learned by the high-level agent.
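
    The two-level loop described above can be sketched as follows. This is an illustrative outline only, not the authors' implementation; the HighLevelAgent, LowLevelAgent, and env interfaces are hypothetical placeholders.

```python
# Minimal sketch of a goal-conditioned hierarchical control loop in the
# spirit of Dot-to-Dot: a high-level agent proposes interpretable subgoals,
# a low-level agent acts in the raw state/action space to reach them.
# All class names and interfaces are hypothetical placeholders.
import numpy as np

class HighLevelAgent:
    def propose_subgoal(self, state, final_goal):
        # Hypothetical: pick an intermediate waypoint between the current
        # state and the final goal (the learned, human-readable "dots").
        return state + 0.5 * (final_goal - state)

class LowLevelAgent:
    def act(self, state, subgoal):
        # Hypothetical: move proportionally toward the current subgoal.
        return np.clip(subgoal - state, -0.1, 0.1)

def rollout(env, high, low, final_goal, horizon=50, sub_steps=5):
    state = env.reset()
    for _ in range(horizon):
        subgoal = high.propose_subgoal(state, final_goal)  # interpretable layer
        for _ in range(sub_steps):                         # low-level control
            state, done = env.step(low.act(state, subgoal))
            if done:
                return state
    return state
```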

    FastOrient: Lightweight Computer Vision for Wrist Control in Assistive Robotic Grasping

    Get PDF
    Wearable and assistive robotics for human grasp support are broadly either tele-operated robotic arms or orthoses that control a paralysed user's hand. Such devices require correct orientation for successful and efficient grasping. In many human-robot assistive settings, the end-user is required to explicitly control many degrees of freedom, making effective and efficient control problematic. Here we demonstrate the off-loading of low-level control of assistive robotics and active orthotics through automatic end-effector orientation control for grasping. This paper describes a compact algorithm implementing fast computer vision techniques to obtain the orientation of the target object to be grasped, by segmenting images acquired with a camera positioned on top of the end-effector of the robotic device. The rotation that optimises grasping is computed directly from the object's orientation. The algorithm was evaluated across 6 different scene backgrounds and end-effector approaches to 26 different objects: 94.8% of the objects were detected in all backgrounds, grasping was achieved in 91.1% of cases, and the algorithm's performance was confirmed with a robot simulator.
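
    As an illustration of the orientation-from-segmentation idea (not the paper's exact algorithm), the sketch below uses OpenCV to segment a top-down frame and read the object's in-plane angle from a minimum-area rectangle; the Otsu threshold is an assumed stand-in for the paper's segmentation step.

```python
# Illustrative sketch: estimate a target object's in-plane orientation
# from a top-down camera frame by segmenting the object and fitting a
# minimum-area rectangle. Not the paper's algorithm.
import cv2

def object_orientation(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    # Otsu threshold as a stand-in for the paper's segmentation step.
    _, mask = cv2.threshold(blur, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None  # no object detected in this frame
    largest = max(contours, key=cv2.contourArea)
    (_, _), (_, _), angle = cv2.minAreaRect(largest)
    return angle  # the wrist rotation is derived from this angle
```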

    Gaze-based, context-aware robotic system for assisted reaching and grasping

    Get PDF
    Assistive robotic systems endeavour to support those with movement disabilities, enabling them to move again and regain functionality. The main issue with these systems is the complexity of their low-level control, and how to translate it into simpler, higher-level commands that are easy and intuitive for a human user. We have created a multi-modal system, consisting of different sensing, decision-making and actuating modalities, leading to intuitive, human-in-the-loop assistive robotics. The system takes its cue from the user's gaze, decoding their intentions and implementing low-level motion actions to achieve high-level tasks. As a result, the user simply has to look at the objects of interest for the robotic system to assist them in reaching for those objects, grasping them, and using them to interact with other objects. We present our method for 3D gaze estimation, and a grammar-based implementation of sequences of actions with the robotic system. The 3D gaze estimation was evaluated with 8 subjects, showing an overall accuracy of 4.68 ± 0.14 cm. The full system was tested with 5 subjects, showing successful implementation of 100% of reach-to-gaze-point actions, and full implementation of pick-and-place tasks in 96% and pick-and-pour tasks in 76% of cases. Finally, we discuss our results and the future work needed to improve the system.
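
    A minimal sketch of the grammar-based sequencing idea is given below; the task names, primitives, and structure are hypothetical illustrations of how high-level tasks might expand into low-level actions, not the system's actual grammar.

```python
# Hypothetical sketch of grammar-based action sequencing: each high-level
# task expands into an ordered sequence of low-level motion primitives,
# parameterised by the estimated 3D gaze point. Names are illustrative.
GRAMMAR = {
    "reach":          ["move_to(gaze_point)"],
    "pick":           ["move_to(gaze_point)", "close_gripper()", "lift()"],
    "pick_and_place": ["move_to(gaze_point)", "close_gripper()", "lift()",
                       "move_to(target)", "open_gripper()"],
    "pick_and_pour":  ["move_to(gaze_point)", "close_gripper()", "lift()",
                       "move_to(target)", "tilt_wrist()", "untilt_wrist()"],
}

def expand(task):
    """Expand a user-selected task into an executable primitive sequence."""
    return GRAMMAR[task]

print(expand("pick_and_pour"))
```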

    Neuromuscular Reinforcement Learning to Actuate Human Limbs through FES

    Full text link
    Functional Electrical Stimulation (FES) is a technique to evoke muscle contraction through low-energy electrical signals. FES can animate paralysed limbs, yet how to apply FES to achieve desired movements remains an open challenge. This challenge is accentuated by the complexities of human bodies and the non-stationarity of muscle responses: the former causes difficulties in performing inverse dynamics, and the latter causes control performance to degrade over extended periods of use. Here, we address the challenge via a data-driven approach. Specifically, we learn to control FES through Reinforcement Learning (RL), which can automatically customise the stimulation for each patient. However, RL typically makes Markovian assumptions, while FES control systems are non-Markovian because of the non-stationarities. To deal with this problem, we use a recurrent neural network to create Markovian state representations. We cast FES control as an RL problem and train RL agents to control FES in different settings, in both simulation and the real world. The results show that our RL controllers can maintain control performance over long periods and have better stimulation characteristics than PID controllers.
    Comment: Accepted manuscript, IFESS 2022 (RehabWeek 2022).
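
    The recurrent state-representation idea can be sketched in PyTorch as below; the network sizes, observation contents, and the sigmoid action head are illustrative assumptions, not the authors' architecture.

```python
# Sketch (PyTorch) of the core idea: a recurrent encoder summarises the
# history of observations into an approximately Markovian state, which a
# policy head maps to FES stimulation intensities. Sizes are illustrative.
import torch
import torch.nn as nn

class RecurrentFESPolicy(nn.Module):
    def __init__(self, obs_dim=8, hidden_dim=64, n_channels=2):
        super().__init__()
        self.encoder = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.policy = nn.Sequential(
            nn.Linear(hidden_dim, 64), nn.ReLU(),
            nn.Linear(64, n_channels), nn.Sigmoid(),  # intensities in [0, 1]
        )

    def forward(self, obs_seq, h=None):
        # obs_seq: (batch, time, obs_dim) history of, e.g., joint angles
        out, h = self.encoder(obs_seq, h)
        return self.policy(out[:, -1]), h  # action from latest hidden state

policy = RecurrentFESPolicy()
action, h = policy(torch.randn(1, 10, 8))  # 10-step observation history
```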

    The Supernumerary Robotic 3rd Thumb for Skilled Music Tasks

    Get PDF
    Wearable robotics bring the opportunity to augment human capability and performance, be it through prosthetics, exoskeletons, or supernumerary robotic limbs. The latter concept allows enhancing human performance and assisting users in daily tasks. An important research question, however, is whether the use of such devices can lead to their eventual cognitive embodiment, allowing users to adapt to them and use them as seamlessly as any limb of their own. This paper describes the creation of a platform to investigate this. Our supernumerary robotic 3rd thumb was created to augment piano playing, allowing a pianist to press piano keys beyond their natural hand-span, thus functionally augmenting their skills and making it technically feasible to play with 11 fingers. The robotic finger employs sensors, motors, and a human-interfacing algorithm to control its movement in real time. A proof-of-concept validation experiment demonstrated the effectiveness of the robotic finger in playing musical pieces on a grand piano, showing that naive users were able to use it for 11-finger play within a few hours.
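
    A hypothetical sketch of such a human-interfacing control loop is shown below, assuming a normalised body-worn sensor signal mapped to a servo angle; the sensor modality and all interfaces are assumptions, not details from the abstract.

```python
# Hypothetical real-time control loop for a supernumerary robotic finger:
# a body-worn sensor signal is smoothed and mapped to a servo angle that
# flexes the thumb onto a key. Hardware interfaces are placeholders.
import time

ALPHA = 0.3                        # exponential-smoothing factor
MIN_ANGLE, MAX_ANGLE = 0.0, 90.0   # servo travel in degrees

def control_loop(read_sensor, set_servo_angle, rate_hz=100):
    smoothed = 0.0
    while True:
        raw = read_sensor()                     # normalised to [0, 1]
        smoothed = ALPHA * raw + (1 - ALPHA) * smoothed
        angle = MIN_ANGLE + smoothed * (MAX_ANGLE - MIN_ANGLE)
        set_servo_angle(angle)                  # press / release the key
        time.sleep(1.0 / rate_hz)
```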

    Embroidered Electromyography: A Systematic Design Guide

    Get PDF